2024-08-21T02:52:51,718473449+00:00
I’ve set up a small cloud VPS for my personal needs. Currently, the provider of the VPS doesn’t offer native IPv6. I need to be able to use IPv6 on my cloud VPS, so I resorted to using the tunnelbroker service to get it.
I experienced some slowdown when the cloud VPS was accessed via IPv6. Strangely, when the same VPS was accessed via IPv4, the slowdown disappeared.
I suspected that there was a problem with the tunnelbroker gateway, but before settling on that I wanted to check whether another solution exists that can be deployed easily from my end.
After digging up some information, I found that TCP fundamentally starts slow and then gradually ramps up to speed based on acknowledged packets and packet loss. There are several congestion control algorithms available that control how the kernel handles TCP traffic. The oldest is reno, while Debian’s default is cubic.
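To see which algorithms the running kernel already knows about and which one is currently active, the relevant sysctls can be queried directly; this is just a quick check, nothing BBR-specific.
# list the congestion control algorithms the kernel has loaded
sysctl net.ipv4.tcp_available_congestion_control
# show the one currently in use (cubic on a stock Debian)
sysctl net.ipv4.tcp_congestion_control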
In 2017, there was an effort by one of the big cloud providers to speed up TCP. The result is the TCP BBR congestion control algorithm, which has been available in the mainline kernel since version 4.9, but sadly it’s not enabled by default on Debian.
The kernel configuration option needed for BBR is CONFIG_TCP_CONG_BBR. On Debian, the algorithm is compiled as the tcp_bbr module.
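Before loading anything, it doesn’t hurt to confirm that the running kernel was actually built with that option and that the module is shipped; on Debian the kernel config is normally available under /boot, so something like this should do.
# check the build-time option of the running kernel
grep CONFIG_TCP_CONG_BBR /boot/config-$(uname -r)
# confirm the tcp_bbr module is available
modinfo tcp_bbr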
To activate the BBR algorithm, the module needs to be loaded and the congestion control switched over via sysctl.
modprobe tcp_bbr
sysctl net.ipv4.tcp_congestion_control=bbr
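To double-check that the module is actually loaded and the setting took effect:
# the module should show up here
lsmod | grep tcp_bbr
# and the active algorithm should now read bbr
sysctl net.ipv4.tcp_congestion_control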
To make the change permanent, these commands are sufficient.
echo tcp_bbr > /etc/modules-load.d/99-tcp-bbr.conf
echo net.ipv4.tcp_congestion_control=bbr > /etc/sysctl.d/98-tcp-bbr.conf
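Assuming a systemd-based Debian, the persistent configuration can be applied without a reboot by reloading the module list and the sysctl drop-in files:
# load modules listed in /etc/modules-load.d
systemctl restart systemd-modules-load.service
# re-apply all sysctl drop-in files, including the new one
sysctl --system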
Some information I found also suggested setting the tc queueing discipline to fq, but I found that the qdisc on my cloud VPS was already set to fq_codel, which is another fq variant optimized for wireless-like environments.
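The qdisc attached to each interface can be inspected with tc, which is how the fq_codel default shows up:
# list the root qdisc of every interface
tc qdisc show
I switched the default to plain fq anyway, since that’s what the BBR write-ups recommended.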
echo net.core.default_qdisc=fq > /etc/sysctl.d/99-fq-qdisc.conf
sysctl net.core.default_qdisc=fq
# enable fq on he-ipv6 interface
tc qdisc replace dev he-ipv6 root fq
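Afterwards the tunnel interface should report fq as its root qdisc, and the per-qdisc statistics make for a handy sanity check:
tc -s qdisc show dev he-ipv6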
After enabling the BBR algorithm, the IPv6 connection from my end feels faster. I’ve not performed any benchmark to prove this, but watching the network speed meter on the notification bar, I noticed that TCP BBR recovers the connection back to full speed more quickly after slight drops.